White-box models expose their internal structures, parameters, and processes in ways that are understandable to humans. This human understandability is crucial: deep neural networks could, in principle, be interrogated as well, but without techniques of explainable AI their billions of weights remain incomprehensible, even to experts.
Used in Chap. 21: pages 337, 340
Also known as white-box learning
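As an illustration (not part of the original entry), the sketch below fits a shallow decision tree with scikit-learn; its split features and thresholds can be printed and read directly, which is what makes such a model white-box. The data and feature names are hypothetical.

```python
# Minimal sketch: a white-box model whose internal structure is human-readable.
# Assumes scikit-learn and numpy are installed; data and feature names are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: two illustrative features, binary label
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# A shallow decision tree is a classic white-box model: its learned
# structure (which feature is split, at what threshold) can be printed
# and inspected directly, unlike the weights of a deep neural network.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["feature_0", "feature_1"]))
```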